
    Empowering the trustworthiness of ML-based critical systems through engineering activities

    This paper reviews the entire engineering process of trustworthy Machine Learning (ML) algorithms designed to equip critical systems with advanced analytics and decision functions. We start from the fundamental principles of ML and describe the core elements conditioning its trust, particularly through its design: namely domain specification, data engineering, design of the ML algorithms, their implementation, evaluation, and deployment. These components are organized in a unique framework for the design of trusted ML systems. Comment: This work has been supported by the French government under the "France 2030" program, as part of the SystemX Technological Research Institute.

    Simulation architecture definition for complex systems design: A tooled methodology

    For the design of complex systems, as in the automotive industry, Model-Based Systems Engineering (MBSE) is considered a promising solution to formalize and communicate information. Numerical simulation is also routinely used to answer design questions as they arise. However, the link between MBSE and simulation still needs improvement. In this work, a tooled methodology is proposed to enhance the link between system architecture and numerical simulation. In a first step, a solicitation package is formalized and implemented in a SysML-based tool to define the simulation needs. In a second step, a tool is developed to define the simulation architecture and to pilot the execution of the simulation. We show that, thanks to the proposed process and exchange format between the system and simulation architects, model reuse and agility are improved in complex systems design.

    Modeling, analysis, and optimization of the separation of a space rocket from a carrier aircraft

    In an air launch to orbit, a space rocket is launched from a carrier aircraft. Such systems attract growing interest, in particular for the deployment of small satellites. This Ph.D. thesis is part of the PERSEUS program of the French space agency CNES and follows the development of a small-scale demonstrator called EOLE. It focuses on the very sensitive separation phase. The similitude constraints which must be respected to study the full-scale separation with EOLE are first identified. Because a mass constraint is not respected, the possibilities to directly extrapolate data obtained with EOLE to a larger scale in a deterministic approach are limited, so it is decided to study the separation in a probabilistic approach by developing a new multi-body model. A great variety of uncertainties is taken into account, from the aerodynamic interactions to the atmospheric turbulence, the separation mechanism, and the launch trajectories. A new generic performance criterion, based on elementary geometries, is developed to quantify the safety of the separation; it could also be used in other contexts. A sensitivity analysis is then applied to estimate the influence of the uncertainty factors on the performance criterion. Given the large number of factors involved and the non-negligible simulation time, the model is first simplified. The Morris method is applied to identify uncertainty factors with a low influence which can be fixed to a given value. This is a frequent step, but it is shown that there is a high risk of fixing factors whose influence has in fact been underestimated, which would alter any further study. An adaptation of the Morris method, improving the sampling of the factors, the calculation of their influence, and the statistical treatment of the results, significantly reduces this risk. Once the impact of the different uncertainties is estimated, the launch conditions are optimized to reduce the probability of a separation failure.
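The Morris screening step mentioned above (identifying low-influence factors via elementary effects) can be sketched in a few lines. This is a minimal, illustrative Python sketch of the classic elementary-effects computation, not the improved variant developed in the thesis; the toy model and all names are hypothetical:

```python
import random

def morris_elementary_effects(model, k, trajectories=20, delta=0.5, seed=0):
    """Estimate Morris mu* for a model f: [0,1]^k -> R.

    mu* is the mean of |elementary effect| per factor; factors with a
    small mu* are candidates for being fixed to a nominal value.
    """
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        # Random base point on a grid, then move one factor at a time.
        x = [rng.choice([0.0, 0.25, 0.5]) for _ in range(k)]
        order = list(range(k))
        rng.shuffle(order)
        y = model(x)
        for i in order:
            x_new = list(x)
            x_new[i] = x[i] + delta
            y_new = model(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Toy model: factor 0 dominant, factor 2 nearly inert.
f = lambda x: 10.0 * x[0] + 2.0 * x[1] + 0.1 * x[2]
mu_star = morris_elementary_effects(f, k=3)
# For a linear model, each mu* equals the factor's coefficient.
```

A screening rule would then fix the factors whose `mu_star` falls below a chosen threshold; the thesis's contribution is precisely to reduce the risk of fixing a factor whose influence was underestimated by such an estimate.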

    Identification of Systems With Similar Chains of Components for Simulation Reuse

    Simulation is an essential tool to evaluate a complex system's behavior. Simulation reuse can potentially improve simulation quality, cost, and delivery. However, identifying reusable simulations is a difficult task, often manual and based on limited information. This paper presents a method to facilitate the reuse of specific parts of past simulations. The system to simulate is compared to systems which have already been simulated. The comparison, which identifies similar chains of components in the systems' block diagrams, is formalized as inexact graph matching. It takes into account standardized tags as well as block properties defined by a name, a value, and a unit. Similar systems are identified at a limited computational cost. When two systems are similar, a mapping between their components and interactions can be obtained at a higher computational cost. A software prototype is implemented to perform the necessary computations, visualize the results, and accordingly select the simulation parts to reuse. The prototype is tested with the block diagram of an autonomous electric car: similar chains of components are successfully identified in the powertrain of a non-autonomous electric car represented in a past simulation, and the corresponding part of the past simulation can then be selected for reuse.
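The comparison of tagged blocks with named, valued, unit-carrying properties can be illustrated with a much-simplified sketch. The following Python code is not the paper's inexact-graph-matching algorithm; it restricts block diagrams to linear chains and uses a hypothetical similarity score, purely to show the idea of matching a new chain against a past one:

```python
def block_similarity(a, b):
    """Similarity in [0, 1] between two blocks, each a (tag, properties)
    pair where properties maps a name to a (value, unit) tuple."""
    tag_a, props_a = a
    tag_b, props_b = b
    if tag_a != tag_b:
        return 0.0                      # standardized tags must agree
    shared = set(props_a) & set(props_b)
    if not shared:
        return 0.5                      # same tag, nothing more to compare
    score = 0.0
    for name in shared:
        va, ua = props_a[name]
        vb, ub = props_b[name]
        if ua != ub:
            continue                    # incommensurable units: no credit
        score += 1.0 / (1.0 + abs(va - vb) / max(abs(va), abs(vb), 1e-9))
    return 0.5 + 0.5 * score / len(shared)

def best_matching_subchain(new_chain, past_chain):
    """Slide the new chain along the past one; return (start index,
    mean block similarity) of the best-matching window."""
    best = (0, 0.0)
    n = len(new_chain)
    for start in range(len(past_chain) - n + 1):
        s = sum(block_similarity(a, b)
                for a, b in zip(new_chain, past_chain[start:start + n])) / n
        if s > best[1]:
            best = (start, s)
    return best

# Hypothetical powertrain chains: (tag, {property: (value, unit)}).
new = [("Battery", {"capacity": (50.0, "kWh")}),
       ("Inverter", {"power": (150.0, "kW")}),
       ("Motor", {"power": (150.0, "kW")})]
past = [("Sensor", {}),
        ("Battery", {"capacity": (40.0, "kWh")}),
        ("Inverter", {"power": (120.0, "kW")}),
        ("Motor", {"power": (120.0, "kW")})]
start, score = best_matching_subchain(new, past)
```

Here the Battery-Inverter-Motor subchain of the past diagram is found at offset 1 with a high similarity, which in the paper's setting would flag the corresponding part of the past simulation as a reuse candidate.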

    Improving simulation specification with MBSE for better simulation validation and reuse

    A simulation can be a complex architecture of simulation models, simulation tools, and computing hardware. However, its development often relies on informal procedures and can begin without a clear, complete, and formal definition of the simulation needs. Simulation traceability is then compromised, which makes it difficult to validate whether a simulation meets the needs or to understand the purpose of a simulation model that could be reused. This paper proposes an approach to improve the definition of simulation needs using Model-Based Systems Engineering. Based on the semi-automatic processing of a system architecture, it presents a new method to formulate a so-called "simulation request" which covers (1) the part of the system to be simulated; (2) the objective of the simulation; (3) the simulation quality, cost, and delivery; (4) the test scenarios; (5) the data for simulation calibration and validation; and (6) the verification and validation of the simulation. All the tooling required for the formulation of the simulation request was prototyped in a SysML editor, with machine learning capabilities for the choice of test scenarios. The method and tooling were tested on the case of an autonomous car passing under traffic lights.
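The six items of the "simulation request" form a natural record type. As a minimal sketch (the actual artifact in the paper is a SysML model, not Python; the field names and types here are hypothetical), the structure could look like:

```python
from dataclasses import dataclass, field

@dataclass
class SimulationRequest:
    """Formal simulation request covering the six items of the method."""
    system_scope: list            # (1) part of the system to be simulated
    objective: str                # (2) objective of the simulation
    quality_cost_delivery: dict   # (3) required quality, cost, and delivery
    test_scenarios: list = field(default_factory=list)   # (4)
    calibration_data: list = field(default_factory=list) # (5)
    vv_plan: str = ""             # (6) verification & validation of the simulation

req = SimulationRequest(
    system_scope=["Perception", "TrafficLightDetector"],
    objective="Estimate detection range under low-sun glare",
    quality_cost_delivery={"max_error": "5%", "deadline_weeks": 4},
)
```

Making the request an explicit, typed artifact is what restores traceability: each simulation model can point back to the request item it answers, which is the validation and reuse benefit the paper targets.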

    A tooled methodology for the system architect's needs in simulation with autonomous driving application

    Model-Based Systems Engineering (MBSE) is a promising solution to formalize and communicate information about the design of complex systems, in particular for the automotive industry, which faces new challenges associated with autonomous driving. Numerical simulation is commonly used to support the design of these complex systems, but its possible relations with MBSE should be further investigated. This work, conducted with academic and industrial partners at the research institute IRT SystemX, aims at further bridging the gap between system architecture and numerical simulation. An industrial design problem, the design of an autonomous vehicle passing traffic lights, is used to validate and illustrate new methods and tools based on SysML. Their aim is to (1) guide the system architect in formulating a question requiring simulation, called a solicitation, and (2) guide the simulation architect in designing a simulation architecture, with a special focus on consistency with the system. A Java plugin was developed in the SysML editor Papyrus for the solicitation, and a SysML metamodel was defined for the simulation architecture. The solicitation associated with the industrial design problem is answered by a multiobjective optimization of the vehicle's cost and electrical consumption using a co-simulation between the tools Simulink and Amesim.

    Towards a holistic approach for AI trustworthiness assessment based upon aids for multi-criteria aggregation

    The assessment of the trustworthiness of AI-based systems is a challenging process given the complexity of the subject, which involves qualitative and quantifiable concepts, a wide heterogeneity and granularity of attributes, and in some cases even the non-commensurability of the latter. Evaluating the trustworthiness of AI-enabled systems is particularly decisive in safety-critical domains where AIs are expected to operate mostly autonomously. To overcome these issues, the Confiance.ai program [1] proposes an innovative solution based upon multi-criteria decision analysis. The approach encompasses several phases: structuring trustworthiness as a set of well-defined attributes, exploring the attributes to determine related performance metrics (or indicators), selecting assessment methods or control points, and structuring a multi-criteria aggregation method to estimate a global evaluation of trust. The approach is illustrated by applying some performance metrics to a data-driven AI context, whereas the focus on aggregation methods is left as a near-term perspective of the Confiance.ai milestones.
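Since the abstract leaves the aggregation method itself as future work, the simplest baseline worth sketching is a weighted sum over normalized attribute scores. The Python below is only that baseline, not Confiance.ai's method; attribute names and weights are hypothetical, and a weighted sum notably assumes commensurable, compensable attributes, which the abstract flags as questionable:

```python
def aggregate_trust(scores, weights):
    """Weighted-sum aggregation of normalized trustworthiness indicators.

    scores  : attribute -> value in [0, 1] (already normalized)
    weights : attribute -> non-negative importance weight
    Returns a global trust score in [0, 1].
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same attributes")
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(scores[a] * weights[a] for a in scores) / total

trust = aggregate_trust(
    {"robustness": 0.8, "explainability": 0.6, "data_quality": 0.9},
    {"robustness": 2.0, "explainability": 1.0, "data_quality": 1.0},
)
```

Non-compensatory alternatives (e.g. taking the minimum over attributes, so one weak attribute caps the global score) would address the compensation objection at the cost of discarding weighting information.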

    An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering

    The adoption of machine learning (ML) depends on its ability, once deployed, to actually deliver the expected service safely and to meet user expectations in terms of quality and continuity of service. For instance, users expect that the technology will not do something it is not supposed to do, e.g., perform actions without informing them. Thus, the use of Artificial Intelligence (AI) in safety-critical systems such as avionics, mobility, defense, and healthcare requires proving their trustworthiness throughout the overall lifecycle (from design to deployment). Based on surveys of quality measures, characteristics, and sub-characteristics of AI systems, the Confiance.ai program (www.confiance.ai) aims to identify the relevant trustworthiness attributes and their associated Key Performance Indicators (KPIs), or the associated methods for assessing the induced level of trust.